The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies is much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, with almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
Both of these results are strengthened and generalized in ``On-line File
Caching'' (1998). Comment: conference version: "On-Line Caching as Cache Size
Varies", SODA (1991).
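The LRU policy analyzed above can be sketched in a few lines of Python (a minimal illustration with hypothetical names, not the paper's primal-dual formulation):

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Simulate LRU paging: count faults on a request sequence with cache size k."""
    cache = OrderedDict()  # keys kept in recency order, most recent last
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: mark page most recently used
        else:
            faults += 1                    # fault: page must be brought in
            if len(cache) >= k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults
```

For example, on the sequence 1, 2, 3, 1, 4 with k = 3, LRU faults four times: the request for 1 is the only hit, and page 2 is evicted to admit page 4.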
Incremental Medians via Online Bidding
In the k-median problem we are given sets of facilities and customers, and
distances between them. For a given set F of facilities, the cost of serving a
customer u is the minimum distance between u and a facility in F. The goal is
to find a set F of k facilities that minimizes the sum, over all customers, of
their service costs.
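The service-cost objective just defined can be written down directly (a sketch; the `dist` callback and the function name are illustrative assumptions, not the paper's notation):

```python
def kmedian_cost(facilities, customers, dist):
    """Sum, over all customers, of the distance to the nearest open facility."""
    return sum(min(dist(u, f) for f in facilities) for u in customers)
```

For instance, with points on a line and `dist = lambda u, f: abs(u - f)`, facilities {0, 10} serve customers {1, 2, 9} at cost 1 + 2 + 1 = 4.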
Following Mettu and Plaxton, we study the incremental medians problem, where
k is not known in advance, and the algorithm produces a nested sequence of
facility sets, where the kth set has size k. The algorithm is c-cost-competitive
if, for every k, the cost of the kth set is at most c times the cost of the
optimum set of size k. We give improved incremental algorithms for the metric
version: an
8-cost-competitive deterministic algorithm, a 2e ~ 5.44-cost-competitive
randomized algorithm, a (24+epsilon)-cost-competitive, poly-time deterministic
algorithm, and a (6e+epsilon ~ 16.31)-cost-competitive, poly-time randomized
algorithm.
The algorithm is s-size-competitive if the cost of the kth set is at most the
minimum cost of any set of size k, and has size at most s k. The optimal
size-competitive ratios for this problem are 4 (deterministic) and e
(randomized). We present the first poly-time O(log m)-size-approximation
algorithm for the offline problem and the first poly-time O(log m)-size-competitive
algorithm for the incremental problem.
Our proofs reduce incremental medians to the following online bidding
problem: faced with an unknown threshold T, an algorithm submits "bids" until
it submits a bid that is at least the threshold. It pays the sum of all its
bids. We prove that folklore algorithms for online bidding are optimally
competitive.Comment: conference version appeared in LATIN 2006 as "Oblivious Medians via
Online Bidding
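The folklore doubling strategy for online bidding admits a tiny sketch (names are my own): bid successive powers of two until a bid reaches the threshold, paying at most 4T on any threshold T >= 1.

```python
def doubling_bids(T):
    """Bid 1, 2, 4, ... until a bid reaches the threshold T.
    Returns (winning_bid, total_paid); total_paid < 4*T for T >= 1."""
    bid, total = 1, 0
    while True:
        total += bid
        if bid >= T:
            return bid, total
        bid *= 2
```

For T = 5 the bids are 1, 2, 4, 8, for a total payment of 15 < 4·5: the bids form a geometric series, so the total is less than twice the winning bid, which is in turn less than 2T.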
Metacognition and the Improvement of University Chemistry Teaching
This work, which is part of a more extensive research project on the improvement of chemistry teaching at the university level, presents results obtained by applying an innovative teaching methodology. The methodology was designed to help students better understand and solve problems on the topic "Solutions". To facilitate meaningful learning, it includes the use of metacognitive tools (concept maps, Gowin's Vee, and clinical interviews), which allow students to apply methodologies conducive to such learning. After applying these tools, we evaluated the participating students to measure their achievements and their learning. The analysis of the results shows that the new instructional approach helps students in their learning process as they become aware of the mechanisms they use to achieve meaningful learning.
Straight-line Drawability of a Planar Graph Plus an Edge
We investigate straight-line drawings of topological graphs that consist of a
planar graph plus one edge, also called almost-planar graphs. We present a
characterization of such graphs that admit a straight-line drawing. The
characterization enables a linear-time testing algorithm to determine whether
an almost-planar graph admits a straight-line drawing, and a linear-time
drawing algorithm that constructs such a drawing, if it exists. We also show
that some almost-planar graphs require exponential area for a straight-line
drawing.
First-Fit is Linear on Posets Excluding Two Long Incomparable Chains
A poset is (r + s)-free if it does not contain two incomparable chains of
size r and s, respectively. We prove that when r and s are at least 2, the
First-Fit algorithm partitions every (r + s)-free poset P into at most
8(r-1)(s-1)w chains, where w is the width of P. This solves an open problem of
Bosek, Krawczyk, and Szczypka (SIAM J. Discrete Math., 23(4):1992--1999, 2010). Comment: v3: fixed some typos.
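First-Fit chain partitioning, the algorithm analyzed above, admits a short sketch (the `comparable` predicate and names are illustrative, not the paper's notation): process the elements online and place each into the first chain all of whose members are comparable with it.

```python
def first_fit_chains(elements, comparable):
    """First-Fit chain partitioning: put each element into the first chain
    whose members are all comparable with it; open a new chain otherwise."""
    chains = []
    for x in elements:
        for chain in chains:
            if all(comparable(x, y) for y in chain):
                chain.append(x)
                break
        else:
            chains.append([x])
    return chains
```

For example, on the divisibility poset with `comparable = lambda a, b: a % b == 0 or b % a == 0`, the sequence 1, 2, 4, 3 yields the two chains [1, 2, 4] and [3], since 3 is incomparable with 2.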
Information Gathering in Ad-Hoc Radio Networks with Tree Topology
We study the problem of information gathering in ad-hoc radio networks
without collision detection, focusing on the case when the network forms a
tree, with edges directed towards the root. Initially, each node has a piece of
information that we refer to as a rumor. Our goal is to design protocols that
deliver all rumors to the root of the tree as quickly as possible. The protocol
must complete this task within its allotted time even though the actual tree
topology is unknown when the computation starts. In the deterministic case,
assuming that the nodes are labeled with small integers, we give an O(n)-time
protocol that uses unbounded messages, and an O(n log n)-time protocol using
bounded messages, where any message can include only one rumor. We also
consider fire-and-forward protocols, in which a node can only transmit its own
rumor or the rumor received in the previous step. We give a deterministic
fire-and-forward protocol with running time O(n^1.5), and we show that it is
asymptotically optimal. We then study randomized algorithms where the nodes are
not labeled. In this model, we give an O(n log n)-time protocol and we prove
that this bound is asymptotically optimal.
The Power of Centralized PC Systems of Pushdown Automata
Parallel communicating systems of pushdown automata (PCPA) were introduced in
(Csuhaj-Varj{\'u} et al. 2000) and in their centralized variants shown to be
able to simulate nondeterministic one-way multi-head pushdown automata. A
claimed converse simulation for returning mode (Balan 2009) turned out to be
incomplete (Otto 2012) and a language was suggested for separating these PCPA
of degree two (number of pushdown automata) from nondeterministic one-way
two-head pushdown automata. We show that the suggested language can be accepted
by the latter computational model. We present a different example over a single
letter alphabet indeed ruling out the possibility of a simulation between the
models. The open question about the power of centralized PCPA working in
returning mode is then settled by showing them to be universal. Since the
construction is possible using systems of degree two, this also improves the
previous bound of three for generating all recursively enumerable languages.
Finally, PCPAs are restricted in such a way that a simulation by multi-head
automata is possible.
Snapping Graph Drawings to the Grid Optimally
In geographic information systems and in the production of digital maps for
small devices with restricted computational resources one often wants to round
coordinates to a rougher grid. This removes unnecessary detail and reduces
space consumption as well as computation time. This process is called snapping
to the grid and has been investigated thoroughly from a computational-geometry
perspective. In this paper we investigate the same problem for given drawings
of planar graphs under the restriction that their combinatorial embedding must
be kept and edges are drawn straight-line. We show that the problem is NP-hard
for several objectives and provide an integer linear programming formulation.
Given a plane graph G and a positive integer w, our ILP can also be used to
draw G straight-line on a grid of width w and minimum height (if possible). Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016).
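The naive baseline that the paper improves on is simple coordinate rounding, a one-liner to sketch (names are my own). The point of the NP-hardness result is precisely that this naive rounding can merge vertices or introduce edge crossings, so embedding-preserving snapping needs more care:

```python
def snap_to_grid(points, cell):
    """Naively round each coordinate to the nearest multiple of `cell`.
    This can merge vertices or create edge crossings; it does NOT preserve
    the combinatorial embedding in general."""
    return [(round(x / cell) * cell, round(y / cell) * cell)
            for (x, y) in points]
```

For instance, `snap_to_grid([(1.2, 3.7)], 1)` yields `[(1, 4)]`; two nearby vertices can snap to the same grid point, collapsing an edge.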
Comment on "Conjectures on exact solution of three-dimensional (3D) simple orthorhombic Ising lattices" [arXiv:0705.1045]
It is shown that a recent article by Z.-D. Zhang [arXiv:0705.1045] is in
error and violates well-known theorems. Comment: LaTeX, 3 pages, no figures, submitted to Philosophical Magazine. Expanded version.
Remarks on separating words
The separating words problem asks for the size of the smallest DFA needed to
distinguish between two words of length <= n (by accepting one and rejecting
the other). In this paper we survey what is known and unknown about the
problem, consider some variations, and prove several new results.
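As a concrete instance of separation (a sketch under the assumption that the two words have different lengths; names are my own): a DFA with m states that counts word length modulo m distinguishes them whenever m does not divide their length difference.

```python
def length_mod_dfa(m, accept_residue):
    """A DFA with m states accepting exactly the words whose length is
    congruent to accept_residue mod m; the alphabet is irrelevant."""
    def accepts(word):
        state = 0
        for _ in word:
            state = (state + 1) % m
        return state == accept_residue
    return accepts

# Two words of lengths 5 and 8 are separated by counting length mod 2:
dfa = length_mod_dfa(2, 1)
```

Here `dfa` accepts the length-5 word and rejects the length-8 word, so a 2-state DFA separates them.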